While many systems have been developed to train Graph Neural Networks (GNNs), efficient model inference and evaluation remain to be addressed. For instance, using the widely adopted node-wise approach, model evaluation can account for up to 94% of the time in the end-to-end training process due to neighbor explosion, whereby a node must access its multi-hop neighbors. On the other hand, layer-wise inference avoids the neighbor explosion problem by conducting inference layer by layer, such that each node needs only its one-hop neighbors in each layer. However, implementing layer-wise inference requires substantial engineering effort because users need to manually decompose a GNN model into layers for computation and split the workload into batches to fit device memory. In this paper, we develop Deep Graph Inference (DGI) -- a system for easy and efficient GNN model inference, which automatically translates the training code of a GNN model for layer-wise execution. DGI is general across GNN models and different kinds of inference requests, and supports out-of-core execution on large graphs that cannot fit in CPU memory. Experimental results show that DGI consistently outperforms node-wise inference across different datasets and hardware settings, and the speedup can be over 1,000x.
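As an illustration of the layer-wise pattern DGI automates, the sketch below evaluates a GNN one layer at a time using a plain PyTorch model and a normalized sparse adjacency matrix. This is a minimal hand-written sketch, not DGI's API: `layers`, `adj`, and the batching scheme are assumptions for exposition.

```python
import torch

@torch.no_grad()
def layerwise_inference(layers, adj, feats, batch_size=4096):
    """Evaluate a multi-layer GNN one layer at a time.

    layers: list of torch.nn.Module, each mapping aggregated neighbor
            features to the next layer's node embeddings.
    adj:    normalized sparse adjacency matrix (N x N, torch.sparse).
    feats:  dense node feature matrix (N x d).
    """
    h = feats
    for layer in layers:
        # One sparse matmul gathers one-hop neighbor information for all
        # nodes at once, so no node ever expands its multi-hop neighbors.
        agg = torch.sparse.mm(adj, h)
        # Apply the layer transformation in batches that fit device memory.
        h = torch.cat([layer(agg[i:i + batch_size])
                       for i in range(0, agg.size(0), batch_size)], dim=0)
    return h
```

The key design point is that each layer's full output is materialized before the next layer runs, trading memory traffic for the exponential fan-out of node-wise sampling.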
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19. However, developing such an AI system still faces several challenges. 1) Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack a 3D sequential constraint. 2) Existing 3D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive field sizes over the 3D volume. 3) The sudden outbreak of COVID-19 makes it difficult to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multi-scale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different CNN layers. Second, we employ this MDA-CNN as the basic network of a novel dual multi-scale mean teacher network (DM$^2$T-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multi-scale information. Our DM$^2$T-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, yielding a multi-scale consistency loss on unlabeled data, which is then added to the supervised loss computed on labeled data from the multiple predictions of MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
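The multi-scale consistency idea can be sketched compactly. The snippet below is a minimal illustration, not the paper's released code: it assumes the student and teacher each emit one logits tensor per chosen CNN layer, already upsampled to the label resolution, and combines a supervised loss on labeled data with a consistency loss on unlabeled data.

```python
import torch
import torch.nn.functional as F

def multi_scale_consistency(student_preds, teacher_preds):
    """Consistency between student and teacher predictions taken at
    multiple CNN layers (one logits tensor per scale)."""
    return sum(F.mse_loss(torch.sigmoid(s), torch.sigmoid(t.detach()))
               for s, t in zip(student_preds, teacher_preds)) / len(student_preds)

def total_loss(student_preds_labeled, labels,
               student_preds_unlabeled, teacher_preds_unlabeled, weight=0.1):
    # Supervised loss on labeled data, summed over the multi-scale heads
    # (predictions assumed upsampled to the label resolution).
    supervised = sum(F.binary_cross_entropy_with_logits(p, labels)
                     for p in student_preds_labeled)
    # Multi-scale consistency loss on unlabeled data; the teacher branch
    # is detached, as usual for mean-teacher training.
    consistency = multi_scale_consistency(student_preds_unlabeled,
                                          teacher_preds_unlabeled)
    return supervised + weight * consistency
```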
We present a simple yet effective method for training a named entity recognition (NER) model that operates on business telephone conversation transcripts, which contain noise due to the nature of spoken dialogue and the artifacts of automatic speech recognition. We first fine-tune LUKE, a state-of-the-art NER model, on a limited number of transcripts comprising weakly labeled data and a small amount of human-annotated data. The model achieves high accuracy while also satisfying a practical constraint for inclusion in a commercial telephony product: real-time performance when deployed on cost-effective CPUs rather than GPUs.
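A minimal sketch of such a two-stage fine-tuning regime is below; it is not the authors' code. It assumes a HuggingFace-style model whose forward call returns an object carrying the loss, and the names `luke`, `weak_loader`, and `human_loader` are hypothetical.

```python
import torch

def fine_tune(model, loader, lr, epochs):
    """One generic fine-tuning pass; assumes a HuggingFace-style model
    whose forward call returns an object carrying the loss."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            opt.zero_grad()
            loss = model(**batch).loss
            loss.backward()
            opt.step()

# Stage 1: the large, noisy, weakly labeled transcripts.
# fine_tune(luke, weak_loader, lr=5e-5, epochs=3)
# Stage 2: the small human-annotated set, typically at a lower learning
# rate so the clean labels refine rather than overwrite stage 1.
# fine_tune(luke, human_loader, lr=1e-5, epochs=3)
```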
Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, so precise localization depends heavily on the context formed by their surrounding areas. In addition, the required precision is usually higher than in segmentation and object detection tasks. Therefore, localization has unique challenges distinct from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is utilized to learn the contextualized features at different scales. Then, an attentive fusion module is adopted to aggregate multi-scale features, which consists of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module that integrates the multi-ROI features and non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
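As a rough illustration of the attention-based fusion step, the sketch below lets one token per ROI attend jointly with a global non-ROI token and pools the result. Shapes and module names are assumptions for exposition, not ZIAN's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Fuse multi-ROI features with a non-ROI (global) feature by letting
    all tokens attend to each other, then pooling the fused tokens."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, roi_feats, non_roi_feat):
        # roi_feats: (B, R, dim), one token per ROI; non_roi_feat: (B, dim)
        tokens = torch.cat([roi_feats, non_roi_feat.unsqueeze(1)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # self-attention fusion
        return self.proj(fused.mean(dim=1))           # pooled fused feature
```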
3D lane detection is an integral part of autonomous driving systems. Previous CNN- and Transformer-based methods usually first generate a bird's-eye-view (BEV) feature map from the front-view image and then use a sub-network with the BEV feature map as input to predict 3D lanes. Such approaches require an explicit view transformation between the BEV and the front view, which is itself a challenging problem. In this paper, we propose a single-stage Transformer-based method that directly computes 3D lane parameters and can circumvent the difficult view-transformation step. Specifically, we formulate 3D lane detection as a curve propagation problem by using curve queries. A 3D lane query is represented by a dynamic and ordered anchor point set. In this way, queries with curve representations are iteratively refined in the Transformer decoder to produce the 3D lane detection results. Moreover, a curve cross-attention module is introduced to compute the similarity between curve queries and image features. In addition, a context sampling module that can capture more image features relevant to a curve query is provided to further improve 3D lane detection performance. We evaluate the method on both synthetic and real-world 3D lane detection datasets, and the experimental results show that our method achieves promising performance compared with state-of-the-art methods. The effectiveness of each component is also validated through ablation studies.
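The curve-query idea can be illustrated with a short sketch: each lane query carries an ordered anchor point set, and every decoder iteration cross-attends to image features and predicts offsets that refine the points. This is a simplified stand-in (standard cross-attention rather than the paper's curve cross-attention and context sampling), with illustrative names and shapes.

```python
import torch
import torch.nn as nn

class CurveQueryRefiner(nn.Module):
    """Each lane query is an ordered set of P 3D anchor points; every
    decoder iteration predicts offsets that refine the point set."""
    def __init__(self, dim, num_points=10, iters=4):
        super().__init__()
        self.iters = iters
        self.layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.offset_head = nn.Linear(dim, num_points * 3)

    def forward(self, queries, img_feats, anchors):
        # queries: (B, Q, dim), img_feats: (B, HW, dim),
        # anchors: (B, Q, P, 3) ordered anchor points per curve query
        for _ in range(self.iters):
            queries = self.layer(queries, img_feats)   # cross-attend to image
            offsets = self.offset_head(queries).view_as(anchors)
            anchors = anchors + offsets                # iterative refinement
        return anchors
```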
Deriving the governing equations of complex physical systems from first principles can be very challenging when the systems contain unknown terms and hidden physical mechanisms. In this work, we employ a deep learning architecture to learn the fluid partial differential equations (PDEs) of a plasma system from data acquired with a fully kinetic model. The learned multi-moment fluid PDEs are demonstrated to incorporate kinetic effects such as Landau damping. Based on the learned fluid closure, the data-driven multi-moment fluid modeling can well reproduce all the physical quantities derived from the fully kinetic model. The computed damping rate of Landau damping is consistent with both the fully kinetic simulation and linear theory. Data-driven fluid modeling of PDEs for complex physical systems can be applied to improve fluid closures and reduce the computational cost of multi-scale modeling of global systems.
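At its core, the approach regresses an unknown closure term from lower fluid moments using kinetic-simulation data. The sketch below is a minimal illustration of that regression setup, not the paper's architecture: the moment set (n, u, p), the heat-flux target q, and the network size are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch: learn an unknown closure term of the multi-moment fluid
# equations (e.g., heat flux q) from the lower moments (n, u, p) sampled
# from a fully kinetic simulation. Shapes and names are illustrative.
closure_net = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def train_closure(moments, target_q, epochs=1000, lr=1e-3):
    # moments: (samples, 3) with columns n, u, p; target_q: (samples, 1)
    opt = torch.optim.Adam(closure_net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(closure_net(moments), target_q)
        loss.backward()
        opt.step()
    return closure_net
```

Once trained, the network replaces the truncated moment hierarchy's closure term inside the fluid solver, which is where the reduction in computational cost comes from.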
There is often a huge imbalance between foreground points (i.e., objects) and background points in outdoor LiDAR point clouds. It prevents cutting-edge detectors from focusing on informative areas to produce accurate 3D object detection results. This paper proposes a novel object detection network with semantic point-voxel feature interaction, termed PV-RCNN++. Unlike most existing methods, PV-RCNN++ exploits semantic information to enhance the quality of object detection. First, a semantic segmentation module is proposed to retain more discriminative foreground keypoints. Such a module guides our PV-RCNN++ to integrate more object-related point and voxel features in key areas. Then, to make points and voxels interact efficiently, we utilize a voxel query based on the Manhattan distance to quickly sample voxel features around keypoints. Compared with the ball query, this voxel query reduces the time complexity from O(N) to O(K). Furthermore, to avoid learning only local features, an attention-based residual PointNet module is designed to expand the receptive field and adaptively aggregate neighboring voxel features into keypoints. Extensive experiments on the KITTI dataset show that PV-RCNN++ achieves 81.60%, 40.18%, and 68.21% 3D mAP on the Car, Pedestrian, and Cyclist categories, respectively, achieving performance comparable to or even better than the state-of-the-art.
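The complexity argument behind the Manhattan voxel query can be made concrete: hash the occupied voxels once, then probe only a constant set of integer offsets around each keypoint, so the per-keypoint cost is O(K) in the neighborhood size rather than O(N) in the number of voxels. The sketch below is an illustrative plain-Python version, not the paper's implementation.

```python
from itertools import product

def build_voxel_table(voxel_coords, voxel_feats):
    """Hash occupied voxels once: integer (x, y, z) -> feature."""
    return {tuple(c): f for c, f in zip(voxel_coords, voxel_feats)}

def manhattan_voxel_query(table, keypoint_voxel, radius=2, max_samples=16):
    """Gather features of voxels within Manhattan distance `radius` of a
    keypoint's voxel. Only a constant number K of offsets are probed per
    keypoint (O(K)), unlike a ball query that must radius-search among
    all N voxels (O(N))."""
    cx, cy, cz = keypoint_voxel
    feats = []
    for dx, dy, dz in product(range(-radius, radius + 1), repeat=3):
        if abs(dx) + abs(dy) + abs(dz) > radius:
            continue  # outside the Manhattan ball
        f = table.get((cx + dx, cy + dy, cz + dz))
        if f is not None:
            feats.append(f)
            if len(feats) == max_samples:
                break
    return feats
```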
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular dataset DIV2K as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results for Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is https://github.com/renyang-home/aim22_compresssr.
Topology-imbalance is a graph-specific imbalance problem caused by the uneven topological positions of labeled nodes, and it significantly damages the performance of GNNs. What topology-imbalance means and how to measure its impact on graph learning remain under-explored. In this paper, we provide a new understanding of topology-imbalance from a global view of the distribution of supervision information, in terms of under-reaching and over-squashing, which motivates two quantitative metrics as measurements. In light of our analysis, we propose a novel position-aware graph structure learning framework named PASTEL, which directly optimizes the information propagation paths and solves the topology-imbalance problem at its root. Our key insight is to enhance the connectivity of nodes within the same class so that they obtain more supervision information, thereby relieving the under-reaching and over-squashing phenomena. Specifically, we design an anchor-based position encoding mechanism, which better incorporates relative topological positions and enhances the intra-class inductive bias by maximizing the label influence. We further propose a class-wise conflict measure used as edge weights, which benefits the separation of different node classes. Extensive experiments demonstrate the superior potential and adaptability of PASTEL in enhancing the power of GNNs under different data annotation scenarios.
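One common way to realize an anchor-based position encoding, sketched below under the assumption that relative topological position is measured by hop distance, is to encode every node by its BFS distances to a small set of anchor nodes. This is an illustrative reading, not necessarily PASTEL's exact formulation.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from `source` over an adjacency-list graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def anchor_position_encoding(adj, anchors, num_nodes):
    """Encode each node by its hop distances to the anchor nodes;
    unreachable pairs get a large sentinel distance."""
    enc = [[0] * len(anchors) for _ in range(num_nodes)]
    for j, a in enumerate(anchors):
        dist = bfs_distances(adj, a)
        for v in range(num_nodes):
            enc[v][j] = dist.get(v, num_nodes)  # sentinel if unreachable
    return enc
```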
Masked image modeling (MIM) has achieved promising results on various vision tasks. However, the limited discriminability of the learned representations shows that there is still much room for building a stronger vision learner. Toward this goal, we propose Contrastive Masked Autoencoders (CMAE), a new self-supervised pre-training method for learning more comprehensive and capable vision representations. By elaborately unifying contrastive learning (CL) and masked image modeling (MIM), CMAE leverages their respective advantages and learns representations with both strong instance discriminability and local perceptibility. Specifically, CMAE consists of two branches, where the online branch is an asymmetric encoder-decoder and the target branch is a momentum-updated encoder. During training, the online encoder reconstructs the original image from the latent representation of the masked image to learn holistic features. The target encoder, fed with the full image, enhances the feature discriminability via contrastive learning with its online counterpart. To make CL compatible with MIM, CMAE introduces two new components: pixel shifting for generating plausible positive views and a feature decoder for complementing the features of the contrastive pairs. Thanks to these novel designs, CMAE effectively improves the representation quality and transfer performance over its MIM counterpart. CMAE achieves state-of-the-art performance on the highly competitive benchmarks of image classification, semantic segmentation, and object detection. Notably, CMAE-Base achieves 85.3% top-1 accuracy on ImageNet and 52.5% mIoU on ADE20K, surpassing previous best results by 0.7% and 1.8%, respectively. The code will be made publicly available.
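Two of the moving parts, the momentum-updated target encoder and the combined reconstruction-plus-contrastive objective, can be sketched as follows. This is a generic illustration (a standard EMA update and InfoNCE loss), not CMAE's released code; `lam`, the temperature, and the momentum value are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(online_enc, target_enc, m=0.996):
    """EMA update: target weights drift slowly toward the online weights."""
    for p_o, p_t in zip(online_enc.parameters(), target_enc.parameters()):
        p_t.data.mul_(m).add_(p_o.data, alpha=1.0 - m)

def info_nce(q, k, temperature=0.07):
    """Contrastive loss between online (q) and target (k) features;
    matching rows within the batch are the positive pairs."""
    q, k = F.normalize(q, dim=1), F.normalize(k.detach(), dim=1)
    logits = q @ k.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

def cmae_style_loss(recon, target_pixels, q, k, lam=1.0):
    # Reconstruction on masked patches for holistic features, plus a
    # contrastive term for instance discriminability.
    return F.mse_loss(recon, target_pixels) + lam * info_nce(q, k)
```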